Projected Gradient Descent for Non-Convex Sparse Spike Estimation
Authors
Abstract
Similar Resources
Non-Convex Projected Gradient Descent for Generalized Low-Rank Tensor Regression
In this paper, we consider the problem of high-dimensional tensor regression with low-rank structure. One of the core challenges in learning high-dimensional models is computation, since the underlying optimization problems are often non-convex. While convex relaxations can lead to polynomial-time algorithms, they are often slow in practice. On the other hand, limi...
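The non-convex projected gradient scheme this abstract alludes to can be sketched in a simplified matrix (rather than tensor) setting: take a gradient step on the least-squares loss, then project back onto the set of rank-r matrices via a truncated SVD. The function names and the specific step size are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def low_rank_project(M, r):
    # Projection onto rank-r matrices: truncated SVD (Eckart-Young).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r]

def pgd_low_rank(A_list, y, shape, r, step=0.004, iters=300):
    # Minimise sum_i (<A_i, X> - y_i)^2 subject to rank(X) <= r:
    # a gradient step on the least-squares loss, followed by the
    # (non-convex) low-rank projection.
    X = np.zeros(shape)
    for _ in range(iters):
        residuals = [float(np.tensordot(A, X)) - yi for A, yi in zip(A_list, y)]
        grad = sum(2.0 * res * A for res, A in zip(residuals, A_list))
        X = low_rank_project(X - step * grad, r)
    return X
```

With enough random linear measurements of a low-rank ground truth, this iteration recovers it to high accuracy despite the non-convex constraint set.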
Provable non-convex projected gradient descent for a class of constrained matrix optimization problems
We consider the minimization of convex loss functions over the set of positive semi-definite (PSD) matrices that are also norm-constrained. Such criteria appear in quantum state tomography and phase retrieval applications, among others. However, without careful design, existing methods quickly run into time and memory bottlenecks as problem dimensions increase. However, the ma...
Spectral Compressed Sensing via Projected Gradient Descent
Let x ∈ C be a spectrally sparse signal consisting of r complex sinusoids, with or without damping. We consider the spectral compressed sensing problem, which concerns reconstructing x from its partially revealed entries. By exploiting the low-rank structure of the Hankel matrix corresponding to x, we develop a computationally efficient algorithm for this problem. The algorithm starts from an initi...
Sparse Communication for Distributed Gradient Descent
We make distributed stochastic gradient descent faster by exchanging sparse updates instead of dense updates. Gradient updates are positively skewed as most updates are near zero, so we map the 99% smallest updates (by absolute value) to zero then exchange sparse matrices. This method can be combined with quantization to further improve the compression. We explore different configurations and a...
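The "map the 99% smallest updates to zero" step can be sketched as follows. The helper name is hypothetical; returning the dropped residual (so it can be accumulated locally and sent later, a common error-feedback variant in sparse-update schemes) is an assumption about the scheme, not a verbatim reproduction of the paper's code.

```python
import numpy as np

def sparsify_top_fraction(grad, keep=0.01):
    # Keep only the `keep` fraction of entries with the largest absolute
    # value; zero out the rest. Returns the sparse update plus the
    # residual (the zeroed-out mass), so sparse + residual == grad.
    flat = grad.ravel()
    k = max(1, int(np.ceil(keep * flat.size)))
    # Indices of the k largest-magnitude entries.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    sparse = np.zeros_like(flat)
    sparse[idx] = flat[idx]
    residual = flat - sparse
    return sparse.reshape(grad.shape), residual.reshape(grad.shape)
```

Only the nonzero entries (indices and values) of the sparse part need to be exchanged between workers, which is where the communication savings come from.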
Mirror descent and nonlinear projected subgradient methods for convex optimization
The mirror descent algorithm (MDA) was introduced by Nemirovsky and Yudin for solving convex optimization problems. This method exhibits an efficiency estimate that is only mildly dependent on the dimension of the decision variables, and is thus suitable for solving very large-scale optimization problems. We present a new derivation and analysis of this algorithm. We show that the MDA can be viewed as a nonline...
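A standard concrete instance of the MDA is mirror descent on the probability simplex with the negative-entropy mirror map, which yields multiplicative (exponentiated-gradient) updates. The sketch below shows that instance only; it is not the general method of the paper, and the function name and step size are illustrative.

```python
import numpy as np

def entropic_mirror_descent(grad_fn, x0, steps=200, eta=0.1):
    # Mirror descent with the negative-entropy mirror map on the simplex:
    # the dual-space gradient step becomes a multiplicative update, and
    # the Bregman "projection" back onto the simplex is a renormalisation.
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_fn(x)
        x = x * np.exp(-eta * g)
        x /= x.sum()
    return x
```

For a linear objective c·x over the simplex, the iterates concentrate all mass on the coordinate with the smallest cost, as expected of the exponentiated-gradient update.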
Journal
Journal title: IEEE Signal Processing Letters
Year: 2020
ISSN: 1070-9908,1558-2361
DOI: 10.1109/lsp.2020.3003241